Moderate: Red Hat Ceph Storage 6.1 security and bug fix update

Related Vulnerabilities: CVE-2021-4231, CVE-2022-31129

Synopsis

Moderate: Red Hat Ceph Storage 6.1 security and bug fix update

Type/Severity

Security Advisory: Moderate

Topic

New packages for Red Hat Ceph Storage 6.1 are now available on Red Hat Enterprise Linux.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

These new packages include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6.1/html/release_notes/index

Security Fix(es):

  • moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
  • angular: XSS vulnerability (CVE-2021-4231)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

All users of Red Hat Ceph Storage are advised to upgrade to these updated packages, which provide numerous enhancements and bug fixes.

Solution

For details on how to apply this update, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage Upgrade Guide (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage).
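
As a minimal sketch of that procedure (assuming a cephadm-managed cluster, and assuming registry.redhat.io/rhceph/rhceph-6-rhel9:latest is the target container image for this release), the upgrade is typically driven from the orchestrator CLI:

    # Confirm the cluster is healthy before starting
    ceph -s

    # Begin the rolling upgrade to the new container image
    ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-6-rhel9:latest

    # Monitor progress until all daemons have been redeployed
    ceph orch upgrade status

Consult the Upgrade Guide linked above for prerequisites and for any steps specific to your environment.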

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for IBM z Systems 9 s390x
  • Red Hat Enterprise Linux for Power, little endian 9 ppc64le

Fixes

  • BZ - 1467648 - [RFE] support x-amz-replication-status for multisite
  • BZ - 1600995 - rgw_user_max_buckets is not applied to non-rgw users
  • BZ - 1783271 - [RFE] support for key rotation
  • BZ - 1794550 - [Graceful stop/restart/shutdown] multiple ceph admin sockets
  • BZ - 1929760 - [RFE] [Ceph-Dashboard] [Ceph-mgr] Dashboard to display per OSD slow op counter and type of slow op
  • BZ - 1932764 - [RFE] Bootstrap console logs are through STDERR stream
  • BZ - 1937618 - [CEE][RGW]Bucket policies disappear in archive zone when an object is inserted in master zone bucket
  • BZ - 1975689 - Listing of snapshots is not always successful on nfs exports
  • BZ - 1991808 - [rgw-multisite][LC]: LC rules applied from the master do not run on the slave.
  • BZ - 2004175 - [RGW][Notification][kafka][MS]: arn not populated with zonegroup in event record
  • BZ - 2016288 - [RFE] Defining a zone-group when deploying RGW service with cephadm
  • BZ - 2016949 - [RADOS]: OSD add command has no return error/alert message to convey OSD not added with wrong hostname
  • BZ - 2024444 - [rbd-mirror] Enabling mirroring on image in a namespace falsely fails saying cannot enable mirroring in current pool mirroring mode
  • BZ - 2025815 - [RFE] RBD Mirror Geo-replication metrics
  • BZ - 2028058 - [RFE][Ceph Dashboard] Add alert panel in the front dashboard
  • BZ - 2029714 - ceph --version command reports incorrect ceph version in 5.x post upgrade from 4.2 when compared with ceph version output
  • BZ - 2036063 - [GSS][Cephadm][Add the deletion of the cluster logs in the cephadm rm-cluster]
  • BZ - 2053347 - [RFE] [RGW-MultiSite] [Notification] bucket notification types for replication events (S3 notifications extension, upstream)
  • BZ - 2053471 - dashboard: add support for Ceph Authx (client auth mgmt)
  • BZ - 2064260 - [GSS][RFE] Support for AWS PublicAccessBlock
  • BZ - 2064265 - [GSS][RFE] Feature to disable the ability to set lifecycle policies
  • BZ - 2067709 - [RFE] Add metric relative to osd blocklist
  • BZ - 2076709 - per host ceph-exporter daemon
  • BZ - 2080926 - [cephadm][ingress]: AssertionError seen upon restarting haproxy and keepalived using service name
  • BZ - 2082666 - [cee/sd][RGW] Bucket notification: http endpoints with one trailing slash in the push-endpoint URL failed to create topic
  • BZ - 2092506 - [cephadm] orch upgrade status help message is not apt
  • BZ - 2094052 - CVE-2021-4231 angular: XSS vulnerability
  • BZ - 2097027 - [cee/sd][ceph-dasboard] pool health on primary site shows error for one way rbd_mirror configuration
  • BZ - 2097187 - Unable to redeploy the active mgr instance via "ceph orch daemon redeploy <mgr> <img>" command
  • BZ - 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
  • BZ - 2105950 - [RHOS17][RFE] RGW does not support get object with temp_url using SHA256 digest (required for FIPS)
  • BZ - 2106421 - [rbd-mirror]: mirror image status : non-primary : description : syncing_percent showing invalid value (3072000)
  • BZ - 2108228 - sosreport logs from ODF cluster mangled
  • BZ - 2108489 - [CephFS metadata information missing during a Ceph upgrade]
  • BZ - 2109224 - [RFE] deploy custom RGW realm/zone using orchestrator and a specification file
  • BZ - 2110290 - Multiple "CephPGImbalance" alerts on Dashboard
  • BZ - 2111282 - Misleading information displayed using osd_mclock_max_capacity_iops_[hdd, ssd] command.
  • BZ - 2111364 - [rbd_support] recover from RADOS instance blocklisting
  • BZ - 2111680 - cephadm --config initial-ceph.conf no longer supports comma delimited networks for routed traffic
  • BZ - 2111751 - [ceph-dashboard] In expand cluster create osd default selected as recommended not working
  • BZ - 2112309 - [cee/sd][cephadm]Getting the warning "Unable to parse <spec>.yml successfully" while bootstrapping
  • BZ - 2114835 - prometheus reports an error during evaluation of CephPoolGrowthWarning alert rule
  • BZ - 2120624 - don't leave an incomplete primary snapshot if the peer who is handling snapshot creation dies
  • BZ - 2124441 - [cephadm] osd spec crush_device_class and host identifier "location"
  • BZ - 2127345 - [RGW MultiSite] : during upgrade 2 rgw(out of 6) had Segmentation fault
  • BZ - 2127926 - [RGW][MS]: bucket sync markers fails with ERROR: sync.read_sync_status() returned error=0
  • BZ - 2129861 - [cee/sd][ceph-dashboard] Unable to access dashboard when enabling the "url_prefix" in RHCS 5.2 dashboard configuration
  • BZ - 2132554 - [RHCS 5.3][Multisite sync policies: disabling per-bucket replication doesn't work if the zones replicate]
  • BZ - 2133341 - [RFE] [RBD Mirror] Support force promote an image for RBD mirroring through dashboard
  • BZ - 2133549 - [CEE] dashboard binds to host.containers.internal with podman-4.1.1-2.module+el8.6.0+15917+093ca6f8.x86_64
  • BZ - 2133802 - [RGW] RFE: Enable the Ceph Mgr RGW module
  • BZ - 2136031 - cephfs-top -d <seconds> not working as expected
  • BZ - 2136304 - [cee][rgw] Upgrade to 4.3z1 with vault results in (AccessDenied) failures when accessing buckets.
  • BZ - 2136336 - [cee/sd][Cephadm] ceph mgr is filling up the log messages "Detected new or changed devices" for all OSD nodes every 30 min unnecessarily
  • BZ - 2137596 - [RGW] Suspending bucket versioning in primary/secondary zone also suspends bucket versioning in the archive zone
  • BZ - 2138793 - make cephfs-top display scroll-able like top(1) and fix the blank screen for great number of clients
  • BZ - 2138794 - [RGW][The 'select object content' API is not working as intended for CSV files]
  • BZ - 2138933 - [RGW]: Slow object expiration observed with LC
  • BZ - 2139694 - RGW cloud Transition. Found Errors during transition when using MCG Azure Namespacestore with a pre-created bucket
  • BZ - 2139769 - [ceph-dashboard] rbd mirror sync progress shows empty
  • BZ - 2140074 - [cee/sd][cephfs][dashboard]While evicting one client via ceph dashboard, it evicts all other client mounts of the ceph filesystem
  • BZ - 2140784 - [CEE] cephfs mds crash /builddir/build/BUILD/ceph-16.2.8/src/mds/Server.cc: In function 'CDentry* Server::prepare_stray_dentry(MDRequestRef&, CInode*)' thread 7feb58dcd700 time 2022-11-06T13:26:27.233738+0000
  • BZ - 2141110 - [RFE] Improve handling of BlueFS ENOSPC
  • BZ - 2142167 - [RHCS 6.x] OSD crashes due to suicide timeout in rgw gc object class code, need assistance for core analysis
  • BZ - 2142431 - [RFE] Enabling additional metrics in node-exporter container
  • BZ - 2143285 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters
  • BZ - 2145104 - [ceph-dashboard] unable to create snapshot of an image using dashboard
  • BZ - 2146544 - [RFE] Provide support for labeled perf counters in Ceph Exporter
  • BZ - 2146546 - [RFE] Refactor RBD mirror metrics to use new labeled performance counter
  • BZ - 2147346 - [RFE] New metric to provide rbd mirror image status and snapshot replication information
  • BZ - 2147348 - [RFE] Add additional fields about image status in rbd mirror commands
  • BZ - 2149259 - [RGW][Notification][Kafka]: wrong event timestamp seen as 0.000000 for multipart upload events in event record
  • BZ - 2149415 - [cephfs][nfs] "ceph nfs cluster info" reports a cluster that does not exist
  • BZ - 2149533 - [RFE - Stretch Cluster] Provide way for Cephadm orch to deploy new Monitor daemons with "crush_location" attribute
  • BZ - 2151189 - [cephadm] DriveGroup can't handle multiple crush_device_classes
  • BZ - 2152963 - ceph cluster upgrade failure/handling report with offline hosts needs to be improved
  • BZ - 2153196 - snap-schedule add command is failing when subvolume argument is provided
  • BZ - 2153452 - [6.0][sse-s3][bucket-encryption]: Multipart object uploads are not encrypted, even though bucket encryption is set on a bucket
  • BZ - 2153533 - [RGW][Notification][kafka]: object size 0 seen in event record upon lifecycle expiration event
  • BZ - 2153673 - snapshot schedule stopped on one image and mirroring stopped on secondary images while upgrading from 16.2.10-82 to 16.2.10-84
  • BZ - 2153726 - [RFE] On the Dashboard -> Cluster -> Monitoring page, source url of prometheus is in format http://hostname:9095 which doesn't work when you click.
  • BZ - 2158689 - cephfs-top: new options to sort and limit
  • BZ - 2159294 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
  • BZ - 2159307 - mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
  • BZ - 2160598 - [GSS] MDSs are read only, after commit error on cache.dir(0x1)
  • BZ - 2161479 - MDS: scan_stray_dir doesn't walk through all stray inode fragment
  • BZ - 2161483 - mds: md_log_replay thread (replay thread) can remain blocked
  • BZ - 2163473 - [Workload-DFG] small object recovery, backfill too slow and low client throughput!
  • BZ - 2164327 - [Ceph-Dashboard] Hosts page flickers on auto refresh
  • BZ - 2168541 - mon: prevent allocating snapids allocated for CephFS
  • BZ - 2172791 - mds: make num_fwd and num_retry to __u32
  • BZ - 2175307 - [RFE] Catch MDS damage to the dentry's first snapid
  • BZ - 2180110 - cephadm: reduce spam to cephadm.log
  • BZ - 2180567 - rebase ceph to 17.2.6
  • BZ - 2181055 - [rbd-mirror] RPO not met when adding latency between clusters
  • BZ - 2182022 - [RGW multisite][Archive zone][Duplicate objects in the archive zone]
  • BZ - 2182035 - [RHCS 6.0][Cephadm][Permission denied errors upgrading to RHCS 6]
  • BZ - 2182564 - mds: force replay sessionmap version
  • BZ - 2182613 - client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
  • BZ - 2184268 - [RGW][Notification][Kafka]: persistent notifications not seen after kafka is up for events happened when kafka is down
  • BZ - 2185588 - [CEE/sd][Ceph-volume] wrong block_db_size computed when adding OSD
  • BZ - 2185772 - [Ceph-Dashboard] Fix issues in the rhcs 6.1 branding
  • BZ - 2186095 - [Ceph Dashboard]: Upgrade the grafana version to latest
  • BZ - 2186126 - [RFE] Recovery Throughput Metrics to Dashboard Landing page
  • BZ - 2186472 - [RGW Multisite]: If cloud transition happens on primary of multisite , secondary has no metadata of the object
  • BZ - 2186557 - Metrics names produced by Ceph exporter differ from the names produced by Prometheus manager module
  • BZ - 2186738 - [CEE/sd][ceph-monitoring][node-exporter] node-exporter on a fresh installation is crashing due to `panic: "node_rapl_package-0-die-0_joules_total" is not a valid metric name`
  • BZ - 2186760 - Getting 411, missing content length error for PutObject operations for clients accessing via aws-sdk in RHCS5 cluster
  • BZ - 2186774 - [RHCS 5.3z1][Cannot run `bucket stats` command on deleted buckets in the AZ]
  • BZ - 2187265 - [Dashboard] Landing page has a hyperlink for Manager page even though it does not exist
  • BZ - 2187394 - [RGW CloudTransition] tier configuration incorrectly parses keys starting with digit
  • BZ - 2187617 - [6.1][rgw-ms] Writing on a bucket with num_shards 0 causes sync issues and rgws to segfault on the replication site.
  • BZ - 2187659 - ceph fs snap-schedule listing is failing
  • BZ - 2188266 - In OSP17.1 with Ceph Storage 6.0 object_storage tests fail with Unauthorized
  • BZ - 2188460 - MDS Behind on trimming (145961/128) max_segments: 128, num_segments: 145961
  • BZ - 2189308 - [RGW][Notification][Kafka]: bucket owner not in event record and received object size 0 for s3:ObjectSynced:Create event
  • BZ - 2190412 - [cee/sd][cephadm][testfix] Zapping OSDs on Hosts deployed with Ceph RHCS 4.2z4 or before does not work after upgrade to RHCS 5.3z2 testfix
  • BZ - 2196421 - update nfs-ganesha to V5.1 in RHCS 6.1
  • BZ - 2196920 - Bring in ceph-mgr module framework dependencies for BZ 2111364
  • BZ - 2203098 - [Dashboard] Red Hat Logo on the welcome page is too large
  • BZ - 2203160 - [rbd_support] recover from "double blocklisting" (being blocklisted while recovering from blocklisting)
  • BZ - 2203747 - Running cephadm-distribute-ssh-key.yml will require ansible.posix collection package downstream
  • BZ - 2204479 - Ceph Common: "rgw-orphan-list" and "ceph-diff-sorted" missing from package
  • BZ - 2207702 - RGW server crashes when using S3 PutBucketReplication API
  • BZ - 2207718 - [RGW][notification][kafka]: segfault observed when bucket is configured with incorrect kafka broker
  • BZ - 2209109 - [Ceph Dashboard]: fix pool_objects_repaired and daemon_health_metrics format
  • BZ - 2209300 - [Dashboard] Refresh and information button misaligned on the Overall performance page
  • BZ - 2209375 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep
  • BZ - 2209970 - [ceph-dashboard] snapshot create button got disabled in ceph dashboard
  • BZ - 2210698 - [Dashboard] User with read-only permission cannot access the Dashboard landing page